The DADO Parallel Computer
DADO is a parallel, tree-structured machine designed to provide significant performance improvements in the execution of large production systems. A full-scale production version of the DADO machine would comprise a large set (on the order of a hundred thousand) of processing elements (PEs), each containing its own processor, a small amount (8K bytes, in the current prototype design) of local random access memory, and a specialized I/O switch. The PEs are interconnected to form a complete binary tree. This paper describes the organization of, and the programming language for, two prototypes of the DADO system. We also detail a general procedure for the parallel execution of production systems on the DADO machine and outline how this procedure can be extended to commutative and multiple, independent production systems. We then compare this procedure with the RETE match algorithm and indicate how PROLOG programs may be implemented directly on DADO.
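The tree organization described above can be sketched in miniature. This is an illustrative model only, under assumed names: the `PE` class and the `broadcast`/`resolve` operations are stand-ins for the DADO hardware and its programming language, not the actual machine interface.

```python
# Minimal model of a complete binary tree of processing elements (PEs).
# Illustrative only: the PE fields and operations here are assumptions,
# not the real DADO hardware or instruction set.

class PE:
    def __init__(self, pid):
        self.pid = pid          # position in the complete binary tree
        self.memory = {}        # stands in for the small local RAM

def build_tree(depth):
    """PEs numbered 1..2^depth - 1; children of node i are 2i and 2i+1."""
    return {i: PE(i) for i in range(1, 2 ** depth)}

def broadcast(tree, key, value):
    """Root-to-leaves broadcast: every PE receives the same datum."""
    for pe in tree.values():
        pe.memory[key] = value

def resolve(tree, key):
    """Leaves-to-root max-resolve: report the best local value in the tree."""
    return max(pe.memory.get(key, 0) for pe in tree.values())

tree = build_tree(4)            # 15 PEs, a 4-level complete binary tree
broadcast(tree, "token", 7)     # all PEs now hold the broadcast datum
tree[9].memory["score"] = 42    # one PE computes a local match score
print(resolve(tree, "score"))   # -> 42
```

The broadcast/resolve pair mirrors how a tree machine can distribute a working-memory change to all PEs and then report back a single winning match in time logarithmic in the number of PEs.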
A Note on Implementing OPS5 Production Systems on DADO
This brief note is written in response to a recent publication, "Implementing OPS5 Production Systems on DADO," published as a Carnegie-Mellon Department of Computer Science Technical Report by Anoop Gupta in March 1984. Gupta's paper analyzes the performance of OPS5 production system programs on the DADO parallel computer, a special-purpose production system (PS) machine. The analysis leads Gupta to conclude that DADO is not an effective OPS5 PS machine. We have studied Gupta's analysis carefully and conclude that his conclusions are inaccurate, flawed at times by incorrect or outdated information about the DADO2 prototype and at other times by inexact reasoning. We have divided the following into two sections. Section 2 details specific technical errors regarding the DADO2 design cited by Gupta. Although Gupta properly cites statistics we reported in earlier papers proposing DADO, his analysis is based on an earlier design of the second prototype system presently near completion at Columbia University. Even after making the changes needed for consistency with the current technical design of DADO2, however, Gupta's analysis is flawed by not adequately understanding the detailed workings of several reported algorithms. Section 3 focuses on philosophical differences. We shall be careful to quote Gupta accurately to strengthen our case that his conclusion is rather weak. We conclude that DADO is indeed an effective OPS5 processor. More importantly, we believe the DADO machine will produce dramatic performance improvements in AI computation when the sequentialities inherent in OPS5 are removed. The reader should first carefully read Gupta's paper and any one of the most recent reports detailing the DADO system and algorithms, (Stolfo and Miranker, 1984), for example.
Unsupervised Anomaly-based Malware Detection using Hardware Features
Recent works have shown promise in using microarchitectural execution patterns to detect malware programs. These detectors belong to a class of detectors known as signature-based detectors, as they catch malware by comparing a program's execution pattern (signature) to execution patterns of known malware programs. In this work, we propose a new class of detectors - anomaly-based hardware malware detectors - that do not require signatures for malware detection and thus can catch a wider range of malware, including potentially novel ones. We use unsupervised machine learning to build profiles of normal program execution based on data from performance counters, and use these profiles to detect significant deviations in program behavior that occur as a result of malware exploitation. We show that real-world exploitation of popular programs such as IE and Adobe PDF Reader on a Windows/x86 platform can be detected with nearly perfect certainty. We also examine the limits and challenges in implementing this approach in the face of a sophisticated adversary attempting to evade anomaly-based detection. The proposed detector is complementary to previously proposed signature-based detectors, and the two can be used together to improve security.
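The profile-then-detect idea can be sketched as follows. This is a toy illustration, not the paper's model: the per-counter z-score profile, the sample vectors, and the threshold are all assumptions standing in for the real feature set and unsupervised learner.

```python
# Sketch of an anomaly-based detector over performance-counter samples.
# Assumption: each sample is a fixed-length vector of counter readings
# (e.g. branch misses, cache misses). The z-score profile below is a
# stand-in for the paper's unsupervised models; it only illustrates the
# build-a-normal-profile-then-flag-deviations idea.
from statistics import mean, stdev

def build_profile(normal_samples):
    """Per-counter mean and standard deviation from clean (malware-free) runs."""
    cols = list(zip(*normal_samples))
    return [(mean(c), stdev(c)) for c in cols]

def is_anomalous(profile, sample, threshold=4.0):
    """Flag the sample if any counter deviates too far from its profile."""
    for (mu, sigma), x in zip(profile, sample):
        if sigma > 0 and abs(x - mu) / sigma > threshold:
            return True
    return False

# Hypothetical counter readings: (branch_misses, cache_misses) per interval.
normal = [(100, 12), (104, 11), (98, 13), (102, 12), (101, 12)]
profile = build_profile(normal)
print(is_anomalous(profile, (101, 12)))   # -> False (looks like normal runs)
print(is_anomalous(profile, (400, 90)))   # -> True  (large deviation)
```

Note that no malware samples are needed to train this detector; only clean executions of the protected program, which is what lets it flag previously unseen exploits.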
Simultaneous Firing of Production Rules on Tree Structured Machines
This paper describes a method to realize the simultaneous firing of production rules on tree-structured machines. We propose a simultaneous firing mechanism consisting of global communication and global synchronization between subtrees. We also propose a hierarchical decomposition algorithm for production systems which maximizes total throughput by satisfying two requirements, i.e., maximizing parallel executability and minimizing global communication.
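A decomposition of that flavor can be sketched with a toy greedy heuristic. This is not the paper's algorithm: the cost model, the shared-class affinity bonus, and the rule representation are illustrative assumptions that only show the two competing requirements being traded off.

```python
# Toy decomposition of rules onto subtrees: balance estimated match cost
# across subtrees (parallel executability) while co-locating rules that
# share working-memory classes (less global communication). The cost
# model and affinity heuristic are assumptions for illustration only.

def decompose(rules, k):
    """rules: {name: (cost, set_of_wm_classes)}; returns k subtree buckets."""
    buckets = [{"rules": [], "cost": 0, "classes": set()} for _ in range(k)]
    # Place expensive rules first (greedy longest-processing-time order).
    for name, (cost, classes) in sorted(rules.items(),
                                        key=lambda r: -r[1][0]):
        def score(b):
            # Prefer lightly loaded buckets; reward buckets already
            # holding the same working-memory classes (shared data).
            return b["cost"] - len(b["classes"] & classes)
        best = min(buckets, key=score)
        best["rules"].append(name)
        best["cost"] += cost
        best["classes"] |= classes
    return buckets

rules = {
    "r1": (5, {"goal"}), "r2": (5, {"goal"}),
    "r3": (4, {"item"}), "r4": (4, {"item"}), "r5": (1, {"goal"}),
}
for b in decompose(rules, 2):
    print(sorted(b["rules"]), b["cost"])
```

The two terms in `score` pull in the directions the abstract names: spreading cost raises parallel executability, and the class-overlap bonus keeps communicating rules inside one subtree.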
Software-based Decoy System for Insider Threats
Decoy technology and the use of deception are useful in securing critical computing systems by confounding and confusing adversaries with fake information. Deception leverages uncertainty, forcing adversaries to expend considerable effort to differentiate realistic useful information from purposely planted false information. In this paper, we propose a software-based decoy system that aims to deceive insiders and to detect the exfiltration of proprietary source code. The proposed system generates believable Java source code that appears to an adversary to be entirely valuable proprietary software. Bogus software is generated iteratively using code obfuscation techniques to transform the original software using various transformation methods. Beacons are also injected into the bogus software to detect exfiltration and to raise an alert if the decoy software is touched, compiled, or executed. Based on similarity measurement, the experimental results demonstrate that the generated bogus software is different from the original software while maintaining similar complexity, confusing an adversary as to which is real and which is not.
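The beacon-injection step can be illustrated with a minimal sketch. Everything here is hypothetical: the static-initializer beacon, the `alert.example.com` endpoint, and the insertion point are placeholder choices, not the paper's actual beacon or obfuscation pipeline.

```python
# Sketch of beacon injection into generated decoy Java source.
# Hypothetical: the beacon is a static initializer that phones home if
# the decoy is ever compiled and executed; the endpoint and placement
# are illustrative stand-ins for the paper's real mechanism.

BEACON = """\
    static {{
        // hypothetical phone-home beacon; fires when the decoy is executed
        try {{ new java.net.URL("https://alert.example.com/{tag}")
                  .openStream().close(); }} catch (Exception e) {{}}
    }}"""

def inject_beacon(java_source, tag):
    """Insert the beacon right after the first class-body opening brace."""
    head, brace, tail = java_source.partition("{")
    return head + brace + "\n" + BEACON.format(tag=tag) + tail

decoy = "public class Billing {\n    int total;\n}\n"
print(inject_beacon(decoy, "decoy-7"))
```

Because the beacon lives in a static initializer, merely compiling the stolen decoy and running anything in it triggers the alert, which is what ties exfiltration detection to the bogus code itself.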
Reflections on the Engineering and Operation of a Large-Scale Embedded Device Vulnerability Scanner
We present important lessons learned from the engineering and operation of a large-scale embedded device vulnerability scanner infrastructure. Developed and refined over the period of one year, our vulnerability scanner monitored large portions of the Internet and was able to identify over 1.1 million publicly accessible, trivially vulnerable embedded devices. The data collected has helped us move beyond vague, anecdotal suspicions of embedded insecurity towards a realistic quantitative understanding of the current threat. In this paper, we describe our experimental methodology and reflect on key technical, organizational, and social challenges encountered during our research. We also discuss several key technical design missteps and operational failures and their solutions.
Symbiotes and Defensive Mutualism: Moving Target Defense
If we wish to break the continual cycle of patching and replacing our core monoculture systems to defend against attacker evasion tactics, we must redesign the way systems are deployed so that the attacker can no longer glean the information about one system that allows attacking any other like system. Hence, a new poly-culture architecture that provides complete uniqueness for each distinct device would thwart many remote attacks (except perhaps for insider attacks). We believe a new security paradigm based on perpetual mutation and diversity, driven by symbiotic defensive mutualism, can fundamentally change the 'cat and mouse' dynamic which has impeded the development of truly effective security mechanisms to date. We propose this new 'clean slate design' principle and conjecture that this defensive strategy can also be applied to legacy systems widely deployed today. Fundamentally, the technique diversifies the defensive system of the protected host, thwarting attacks against defenses commonly executed by modern malware.
Combining Baiting and User Search Profiling Techniques for Masquerade Detection
Masquerade attacks are characterized by an adversary stealing a legitimate user's credentials and using them to impersonate the victim and perform malicious activities, such as stealing information. Prior work on masquerade attack detection has focused on profiling legitimate user behavior and detecting abnormal behavior indicative of a masquerade attack. Like any anomaly detection-based technique, detecting masquerade attacks by profiling user behavior suffers from a significant number of false positives. We extend prior work and provide a novel integrated detection approach in this paper. We combine a user behavior profiling technique with a baiting technique in order to more accurately detect masquerade activity. We show that using this integrated approach reduces the false positives by 36% when compared to user behavior profiling alone, while achieving almost perfect detection results. We also show how this combined detection approach serves as a mechanism for hardening the masquerade attack detector against mimicry attacks.
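The combined decision rule can be sketched as follows. This is an assumed composition, not the paper's detector: the bait file names, the anomaly score, and the threshold are all hypothetical placeholders used only to show how baiting corroborates behavior profiling.

```python
# Sketch of the combined masquerade detector: alert when either the
# behavioral anomaly score is high or a bait (decoy) file is touched.
# The score model, threshold, and bait names are illustrative assumptions.

BAIT_FILES = {"passwords_backup.txt", "q3_payroll.xls"}  # hypothetical decoys

def masquerade_alert(anomaly_score, accessed_files, threshold=0.8):
    bait_touched = bool(BAIT_FILES & set(accessed_files))
    # A bait access alone is a strong signal, while a moderate anomaly
    # alone stays below the alarm bar; requiring a high score for
    # profile-only alerts is what cuts false positives from profiling alone.
    return bait_touched or anomaly_score >= threshold

print(masquerade_alert(0.3, ["report.doc"]))             # -> False
print(masquerade_alert(0.3, ["passwords_backup.txt"]))   # -> True
print(masquerade_alert(0.9, ["report.doc"]))             # -> True
```

The bait channel also hardens the detector against mimicry: an attacker who shapes behavior to look statistically normal can still trip an alert by touching a decoy.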
Agent-Based Distributed Learning Applied to Fraud Detection
Inductive learning and classification techniques have been applied to many problems in diverse areas. In this paper we describe an AI-based approach that combines inductive learning algorithms and meta-learning methods as a means to compute accurate classification models for detecting electronic fraud. Inductive learning algorithms are used to compute detectors of anomalous or errant behavior over inherently distributed data sets, and meta-learning methods integrate their collective knowledge into higher-level classification models or "meta-classifiers". By supporting the exchange of models or "classifier agents" among data sites, our approach facilitates cooperation between financial organizations and provides unified, cross-institution protection mechanisms against fraudulent transactions. Through experiments performed on actual credit card transaction data supplied by two different financial institutions, we evaluate this approach and demonstrate its utility.
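The base-learner-plus-meta-classifier structure can be sketched in miniature. The threshold-rule "classifiers" and the majority-vote combiner are stand-ins I am assuming for illustration; the paper's base learners and meta-learning methods are real induction algorithms, not these toys.

```python
# Sketch of meta-learning over distributed fraud data: each site trains
# its own base classifier on local transactions, then a meta-classifier
# combines their predictions. The threshold-rule "learner" and the
# majority-vote combiner are illustrative stand-ins for real models.

def train_threshold_classifier(transactions):
    """Toy base learner: flag amounts at or above the site's fraud cutoff,
    taken as the smallest amount labeled fraudulent at that site."""
    cutoff = min(amt for amt, is_fraud in transactions if is_fraud)
    return lambda amt: amt >= cutoff

def meta_classify(base_classifiers, amt):
    """Meta-level combiner: majority vote over the exchanged site models."""
    votes = sum(clf(amt) for clf in base_classifiers)
    return votes * 2 > len(base_classifiers)

# Hypothetical (amount, is_fraud) records held separately at three sites;
# only the trained models, not the raw data, are exchanged.
site_a = [(50, False), (900, True), (1200, True)]
site_b = [(30, False), (700, True)]
site_c = [(20, False), (1100, True)]
models = [train_threshold_classifier(s) for s in (site_a, site_b, site_c)]
print(meta_classify(models, 1000))   # -> True
print(meta_classify(models, 100))    # -> False
```

Only the fitted models cross site boundaries here, which is the property that lets institutions cooperate without sharing raw transaction data.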
Improving Production System Performance on Parallel Architectures by Creating Constrained Copies of Rules
Production systems have pessimistically been hypothesized to contain only minimal amounts of parallelism [Gupta 1984]. However, techniques are being investigated to extract more parallelism from existing systems. Among these methods, it is desirable to find those which balance the work being performed in parallel evenly among the rules while at the same time decreasing the amount of work which must be performed sequentially in each cycle. The technique of creating constrained copies of culprit rules accomplishes both of the above goals. Production systems are plagued by occasional rules which slow down the entire execution. These rules require much more processing than others and thus cause other processors to idle while the culprit rules continue to match. By creating the constrained copies and distributing them to their own processors, each performs less work while others are busy, yielding increased parallelism, improved load balancing, and less work overall per cycle.
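One way to realize constrained copying can be sketched as follows. The hash-of-an-attribute constraint and the rule representation are assumptions chosen for illustration; the paper's constraints are derived from the rules themselves.

```python
# Sketch of constrained copying: a culprit rule that matches a large
# class of working-memory elements (WMEs) is split into k copies, each
# constrained (here, by a hash of one attribute) to a disjoint slice of
# the WMEs, so k processors share the match work. The hash constraint
# is one simple, assumed way to realize the technique.

def make_constrained_copies(rule_name, match_fn, k):
    def copy(i):
        # Copy i keeps the original condition plus the added constraint
        # that it only considers WMEs falling in its slice.
        return (f"{rule_name}#{i}",
                lambda wme: hash(wme["id"]) % k == i and match_fn(wme))
    return [copy(i) for i in range(k)]

match_big = lambda wme: wme["type"] == "item"        # the culprit rule's test
copies = make_constrained_copies("process-item", match_big, 4)

wmes = [{"id": n, "type": "item"} for n in range(100)]
# Each copy matches only its slice; together the copies cover every WME.
per_copy = [sum(m(w) for w in wmes) for _, m in copies]
print(per_copy, sum(per_copy))
```

The slices are disjoint and exhaustive, so distributing one copy per processor divides the culprit rule's match effort without changing which elements are matched overall.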